# Chain-of-thought optimization

## Denker Mistral Nemo 12B

Apache-2.0 · Large Language Model · Transformers · by nbeerbower

Denker is a small, uncensored, reasoning-focused model fine-tuned from mistral-nemo-kartoffel-12B using ORPO and QLoRA.
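The card names ORPO (odds-ratio preference optimization) and QLoRA (LoRA adapters over a 4-bit quantized base) as the fine-tuning method. Below is a minimal sketch of what such a run looks like with the `trl`, `peft`, and `bitsandbytes` libraries. The dataset name and all hyperparameters are illustrative assumptions, not Denker's actual recipe, and the `processing_class` argument name may be `tokenizer` in older `trl` versions.

```python
# Minimal ORPO + QLoRA fine-tuning sketch (illustrative; not Denker's actual recipe).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base_model = "nbeerbower/mistral-nemo-kartoffel-12B"  # assumed base checkpoint

# QLoRA: load the frozen base model in 4-bit NF4 so it fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# ORPO trains on preference pairs: columns "prompt", "chosen", "rejected".
# Hypothetical dataset name, used only to show the expected format.
dataset = load_dataset("my-org/reasoning-preferences", split="train")

peft_config = LoraConfig(  # LoRA adapters trained on top of the 4-bit base
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = ORPOConfig(
    output_dir="denker-orpo-sketch",
    beta=0.1,               # weight of the odds-ratio preference term
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    max_length=2048,
)

trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```

Unlike DPO, ORPO folds the preference term into the supervised loss, so no separate reference model is kept in memory, which pairs well with the 4-bit QLoRA setup above.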
## Nova 0.5 R1 7B

Apache-2.0 · Large Language Model · Transformers · English · by oscar128372

A high-performance reasoning model built on the OpenThoughts-114k-math dataset and other cognitive-enhancement training sets.
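Datasets like OpenThoughts-114k-math pair math problems with long chain-of-thought solutions. A small sketch of inspecting such a dataset with the `datasets` library; the Hub path below is an assumption, so adjust it to the actual repository:

```python
# Peek at a reasoning-trace dataset such as OpenThoughts-114k-math.
# The Hub path is an assumption; replace it with the real repository ID.
from datasets import load_dataset

ds = load_dataset("open-r1/OpenThoughts-114k-math", split="train")
print(ds)      # row count and column names
print(ds[0])   # one example: typically a problem plus a chain-of-thought solution
```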
## QWQ 32B FP8

Apache-2.0 · Large Language Model · Transformers · by qingcheng-ai

QwQ-32B-FP8 is the FP8-quantized version of the QwQ-32B model; it maintains nearly the same accuracy as the BF16 version while delivering faster inference.
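FP8 checkpoints like this are typically served with an inference engine that reads the quantization scheme from the checkpoint, such as vLLM. A minimal sketch, assuming the Hub ID `qingcheng-ai/QwQ-32B-FP8` and FP8-capable hardware (e.g. Hopper- or Ada-class GPUs):

```python
# Serving an FP8-quantized checkpoint with vLLM (sketch; model ID assumed).
from vllm import LLM, SamplingParams

llm = LLM(
    model="qingcheng-ai/QwQ-32B-FP8",  # assumed Hub ID from the card above
    # vLLM normally detects the FP8 scheme from the checkpoint's quantization
    # config; it can also be forced explicitly:
    quantization="fp8",
    max_model_len=8192,
)

params = SamplingParams(temperature=0.6, max_tokens=1024)
outputs = llm.generate(["Prove that the sum of two even integers is even."], params)
print(outputs[0].outputs[0].text)
```

FP8 halves weight memory relative to BF16 and uses the GPU's native FP8 tensor cores, which is where the faster inference claimed on the card comes from.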